Robust nonlinear system identification: Bayesian mixture of experts using the t-distribution
Authors
Abstract
Similar sources
Robust mixture of experts modeling using the skew $t$ distribution
Mixture of Experts (MoE) is a popular framework in statistics and machine learning for modeling heterogeneity in data for regression, classification and clustering. MoE models for continuous data are usually based on the normal distribution. However, it is known that for data with asymmetric behavior, heavy tails and atypical observations, the use of the normal distribution is unsuitable...
Robust mixture modelling using the t distribution
Normal mixture models are being increasingly used to model the distributions of a wide variety of random phenomena and to cluster sets of continuous multivariate data. However, for a set of data containing a group or groups of observations with longer than normal tails or atypical observations, the use of normal components may unduly affect the fit of the mixture model. In this paper, we consid...
Robust mixture regression using the t-distribution
The traditional estimation of mixture regression models is based on the normality assumption for component errors and is therefore sensitive to outliers or heavy-tailed errors. A robust mixture regression model based on the t-distribution, obtained by extending the mixture of t-distributions to the regression setting, is proposed. However, this proposed new mixture regression model is still not robust to high leve...
Robust Mixture Regression Models Using the t-Distribution
In this report, we propose a robust mixture of regressions based on the t-distribution, extending the mixture of t-distributions proposed by Peel and McLachlan (2000) to the regression setting. This new mixture of regressions model is robust to outliers in the y direction, but not to outliers at high-leverage points. To combat this, we also propose a modified version of the propose...
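The robustification mechanism behind these t-based regression models can be illustrated with a minimal single-component sketch (the full mixture models additionally weight each point by its component responsibility): an EM/IRLS loop in which observations with large standardized residuals are automatically downweighted. Everything below — the function name, the fixed degrees of freedom ν = 4, and the toy data — is illustrative and not taken from the papers themselves.

```python
def t_regression(x, y, nu=4.0, iters=50):
    """Robust simple linear regression with t-distributed errors,
    fit by EM/IRLS with fixed degrees of freedom nu (illustrative sketch)."""
    n = len(x)
    b0, b1 = 0.0, 0.0   # intercept, slope
    sigma2 = 1.0        # scale parameter
    for _ in range(iters):
        # E-step: weight each point; large standardized residuals get small weight
        r = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
        w = [(nu + 1.0) / (nu + ri * ri / sigma2) for ri in r]
        # M-step: weighted least squares for (b0, b1), then scale update
        sw = sum(w)
        mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
        my = sum(wi * yi for wi, yi in zip(w, y)) / sw
        sxx = sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x))
        sxy = sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
        b1 = sxy / sxx
        b0 = my - b1 * mx
        r = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]
        sigma2 = sum(wi * ri * ri for wi, ri in zip(w, r)) / n

    return b0, b1

# Toy data on the line y = 2x with one gross outlier in the y direction:
x = [0, 1, 2, 3, 4, 5, 6, 7]
y = [0, 2, 4, 6, 8, 10, 12, 100]  # last point is the outlier
b0, b1 = t_regression(x, y)
```

Ordinary least squares on these data would be pulled strongly toward the outlier; the t-weighted fit drives the outlier's weight toward zero and recovers a slope close to 2. Note that, as the abstracts point out, this kind of downweighting handles outliers in the y direction but not high-leverage outliers in x.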
Robust Bayesian Inference Using the Mixture Priors
In Bayesian estimation the posterior distribution is proportional to the likelihood function times the prior density of the parameter. Bayesian inference is therefore often affected by the prior density. In this paper we look at how we can make Bayesian inference more robust against a poorly specified prior. We find that using a mixture of conjugate priors enables us to do this. We allow a small pr...
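The mixture-of-conjugate-priors idea can be sketched for the simplest conjugate pair, a Bernoulli likelihood with Beta priors: each Beta component is conjugate-updated as usual, and the mixture weights are reweighted by each component's marginal likelihood, so a vague "safety" component takes over when the data contradict the informative one. The component choices and toy data below are assumptions for illustration, not the paper's example.

```python
import math

def beta_binom_marglik(a, b, s, f):
    # Marginal likelihood of s successes / f failures under a Beta(a, b)
    # prior with a Bernoulli likelihood: B(a+s, b+f) / B(a, b), in log space.
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return math.exp(log_beta(a + s, b + f) - log_beta(a, b))

def posterior_mixture(prior, s, f):
    """prior: list of (weight, a, b) Beta components.
    Returns the posterior mixture: each component conjugate-updated to
    Beta(a+s, b+f), with weights reweighted by marginal likelihood."""
    unnorm = [(w * beta_binom_marglik(a, b, s, f), a + s, b + f)
              for (w, a, b) in prior]
    z = sum(u for (u, _, _) in unnorm)
    return [(u / z, a, b) for (u, a, b) in unnorm]

# An informative Beta(2, 8) component mixed with a small-weight vague
# Beta(1, 1) component, then data that contradict the informative prior:
prior = [(0.95, 2.0, 8.0), (0.05, 1.0, 1.0)]
post = posterior_mixture(prior, s=18, f=2)
```

Here the vague component starts with only 5% weight, but after observing 18 successes out of 20 (which the Beta(2, 8) component finds very unlikely) its posterior weight dominates — which is exactly the robustness-to-misspecification behavior the abstract describes.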
Journal
Journal title: Mechanical Systems and Signal Processing
Year: 2017
ISSN: 0888-3270
DOI: 10.1016/j.ymssp.2016.08.045